
    Decentralized Machine Learning for Intelligent Health Care Systems on the Computing Continuum

    The introduction of electronic personal health records (EHR) enables nationwide information exchange and curation among different health care systems. However, current EHR systems neither provide transparent means for diagnosis support and medical research nor utilize the omnipresent data produced by personal medical devices. Moreover, EHR systems are centrally orchestrated, which creates a potential single point of failure. In this article, we therefore explore novel approaches for decentralizing machine learning over distributed ledgers to create intelligent EHR systems that can utilize information from personal medical devices for improved knowledge extraction. We propose and evaluate a conceptual EHR that enables anonymous predictive analysis across multiple medical institutions. The evaluation results indicate that the decentralized EHR can be deployed over the computing continuum, reducing machine learning time by up to 60% with consensus latency below 8 seconds.
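    The decentralization idea can be pictured with a minimal sketch of federated model averaging, the kind of aggregation step such systems typically build on. The function below is illustrative, not the article's actual code, and the ledger-based consensus is out of scope here.

```cpp
#include <vector>
#include <cstddef>

// Minimal hypothetical sketch: each medical institution trains on its own
// records and shares only model parameters, never raw patient data. The
// simplest aggregation rule averages the parameter vectors; the article
// replaces the central aggregator with a distributed ledger, omitted here.
std::vector<double> average_models(
        const std::vector<std::vector<double>>& local_models) {
    // Assumes at least one model and equal-length parameter vectors.
    std::vector<double> global(local_models.front().size(), 0.0);
    for (const auto& model : local_models)
        for (std::size_t i = 0; i < model.size(); ++i)
            global[i] += model[i];
    for (double& w : global)
        w /= static_cast<double>(local_models.size());
    return global;
}
```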

    VM Image Repository and Distribution Models for Federated Clouds: State of the Art, Possible Directions and Open Issues

    The emerging trend of federated Cloud models relies on virtualization to offer a large-scale distributed Infrastructure-as-a-Service collaborative paradigm to end users. Virtualization leverages Virtual Machines (VMs) instantiated from user-specific templates called VM Images (VMIs). The rapid provisioning of VMs for varying user requests while ensuring Quality of Service (QoS) across multiple cloud providers therefore largely depends on the image repository architecture and distribution policies. We discuss the state of the art in VMI storage repositories and distribution mechanisms for efficient VM provisioning in federated clouds. In addition, we present and compare various representative systems in this realm. Furthermore, we define a design space and identify current limitations, challenges, and open trends for VMI repositories and distribution techniques within federated infrastructures.

    A Two-Stage Multi-Objective Optimization of Erasure Coding in Overlay Networks

    In recent years, overlay networks have emerged as a crucial platform for deploying various distributed applications. Many of these applications rely on data redundancy techniques, such as erasure coding, to achieve higher fault tolerance. However, erasure coding applied in large-scale overlay networks entails various overheads in terms of storage, latency, and data-rebuilding costs. These overheads are largely attributed to the selected erasure-coding scheme and the placement of encoded chunks in the overlay network. This paper explores a multi-objective optimization approach for identifying appropriate erasure-coding schemes and encoded-chunk placements in overlay networks. The uniqueness of our approach lies in jointly considering erasure-coding objectives, such as encoding rate and redundancy factor, with overlay network performance characteristics such as storage consumption, latency, and system reliability. Our approach identifies a variety of trade-off solutions with respect to these objectives in the form of a Pareto front. To solve this problem, we propose a novel two-stage multi-objective evolutionary algorithm, where the first stage determines the optimal set of encoding schemes and the second stage optimizes the placement of the corresponding encoded data chunks in overlay networks of varying sizes. We study the performance of our method by generating and analyzing the Pareto-optimal sets of trade-off solutions. Experimental results demonstrate that the Pareto-optimal set produced by our multi-objective approach includes, and even dominates, the chunk placements delivered by a related state-of-the-art weighted-sum method.
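    To make the Pareto-front notion concrete, here is a minimal sketch of non-dominated filtering over the three overlay objectives named above. The Candidate fields and the quadratic filter are illustrative; the paper's actual method is a two-stage evolutionary search, not this brute-force pass.

```cpp
#include <vector>

// A candidate: an erasure-coding scheme plus a chunk placement, scored on
// the objectives from the abstract. All objectives are minimized here, so
// reliability is negated. Field names are illustrative, not the paper's.
struct Candidate {
    double storage;          // storage consumption
    double latency;          // access latency
    double neg_reliability;  // negated system reliability
};

// a dominates b if a is no worse on every objective and better on at least one.
bool dominates(const Candidate& a, const Candidate& b) {
    bool no_worse = a.storage <= b.storage && a.latency <= b.latency
                 && a.neg_reliability <= b.neg_reliability;
    bool better = a.storage < b.storage || a.latency < b.latency
               || a.neg_reliability < b.neg_reliability;
    return no_worse && better;
}

// Keep only non-dominated candidates: the Pareto front (O(n^2) filter).
std::vector<Candidate> pareto_front(const std::vector<Candidate>& pop) {
    std::vector<Candidate> front;
    for (const auto& c : pop) {
        bool dominated = false;
        for (const auto& other : pop)
            if (dominates(other, c)) { dominated = true; break; }
        if (!dominated) front.push_back(c);
    }
    return front;
}
```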

    Modular router architecture for high-performance interconnection networks

    High-performance routers are fundamental building blocks of the system-wide interconnection networks in high-performance computing systems. Through collective interaction, they provide reliable communication between the computing nodes and manage the communication dataflow. Developing a specialized router architecture is highly complex and requires many factors to be considered. The architecture of a high-performance router depends heavily on the flow-control mechanism, as it dictates the way packets are transferred through the network. In this paper, a novel high-performance "Step-Back-On-Blocking" router architecture is proposed.

    Resource Management Optimization in Multi-Processor Platforms

    Proceedings of: Third International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2016), Sofia (Bulgaria), October 6-7, 2016.
    Modern high-performance computing systems (HPCS) are composed of hundreds of thousands of computational nodes. Effective resource allocation in HPCS is the subject of many scientific investigations, and many programming models for it have been proposed, with the main purpose of increasing the parallel performance of HPCS. This paper investigates the efficiency of a parallel algorithm for resource management optimization based on the Artificial Bee Colony (ABC) metaheuristic while solving a package of NP-complete problems on a multi-processor platform. To achieve minimal parallelization overhead in each cluster node, a multi-level hybrid programming model is proposed that combines coarse-grain and fine-grain parallelism: coarse-grain parallelism is achieved through domain decomposition with message passing among computational nodes using the Message Passing Interface (MPI), while fine-grain parallelism is obtained through loop-level parallelism inside each computational node via compiler-based thread parallelization with Intel TBB. Parallel communications are profiled and parallel performance parameters are evaluated on the basis of experimental results.
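    The two-level model can be sketched as a skeleton that uses MPI for the coarse-grain split across nodes and TBB's parallel_for for the fine-grain loop inside each node. The evaluate() function stands in for one ABC fitness evaluation; it and the problem sizes are illustrative, not the paper's code.

```cpp
#include <mpi.h>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>

// Illustrative stand-in for one ABC fitness evaluation of a problem instance.
double evaluate(int problem_id) { return static_cast<double>(problem_id); }

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int total_problems = 1024;          // package of NP-complete instances
    const int chunk = total_problems / size;  // coarse-grain: split across MPI ranks
                                              // (assumes size divides evenly, for brevity)
    std::vector<double> local(chunk);

    // Fine-grain: TBB parallelizes the loop over this rank's share of problems.
    tbb::parallel_for(tbb::blocked_range<int>(0, chunk),
        [&](const tbb::blocked_range<int>& r) {
            for (int i = r.begin(); i != r.end(); ++i)
                local[i] = evaluate(rank * chunk + i);
        });

    // Combine per-rank results (here: a simple sum of fitness values at rank 0).
    double local_sum = 0.0, global_sum = 0.0;
    for (double v : local) local_sum += v;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```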

    Towards an Environment for Efficient and Transparent Virtual Machine Operations: The ENTICE Approach

    Cloud computing is based on Virtual Machines (VMs) or containers, which provide their own software execution environments that can be deployed by facilitating technologies on top of various physical hardware. The use of VMs or containers is an efficient way to automate the overall software engineering and operation life-cycle. The benefits include elasticity and high scalability, which increase utilization efficiency and decrease operational costs. VMs and containers, as software artifacts, are created using provider-specific templates and are stored in proprietary or public repositories for further use. However, technology-specific choices may reduce their portability and lead to vendor lock-in, particularly when applications need to run in federated Clouds. In this paper we present the current state of development of ENTICE, a novel VM repository and operational environment for federated Clouds. The ENTICE environment has been designed to receive unmodified and functionally complete VM images from its users and to transparently tailor and optimise them for specific Cloud infrastructures with respect to their size, configuration, and geographical distribution, such that they are loaded, delivered, and executed faster and with improved QoS compared to their current behaviour. Furthermore, we provide a specific use-case scenario for the ENTICE environment and present the underlying novel technologies.
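    The trade-off ENTICE targets (image size versus delivery speed across geographically distributed repositories) can be pictured with a small, purely hypothetical sketch; the abstract does not describe ENTICE's internals at this level, so every name below is illustrative.

```cpp
#include <vector>
#include <limits>

// Hypothetical replica-selection rule: pick the repository site that
// minimises a weighted cost of image size (transfer volume) and network
// latency to the target Cloud, the kind of size/geography trade-off the
// ENTICE environment optimises. Not ENTICE's actual algorithm.
struct RepositorySite {
    double image_size_mb;  // size of the (possibly optimised) image copy
    double latency_ms;     // network latency from this site to the target
};

int select_site(const std::vector<RepositorySite>& sites,
                double size_weight, double latency_weight) {
    int best = -1;
    double best_cost = std::numeric_limits<double>::max();
    for (int i = 0; i < static_cast<int>(sites.size()); ++i) {
        double cost = size_weight * sites[i].image_size_mb
                    + latency_weight * sites[i].latency_ms;
        if (cost < best_cost) { best_cost = cost; best = i; }
    }
    return best;  // index of the cheapest site, or -1 if no sites given
}
```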